Results 1 - 2 of 2

1.
Aust N Z J Psychiatry ; : 48674221089229, 2022 Mar 31.
Article in English | MEDLINE | ID: covidwho-2229897

ABSTRACT

OBJECTIVE: To examine and describe telehealth use and attitudes among mental health professionals in Australia and New Zealand during the initial stages of the COVID-19 pandemic.

METHODS: Participants completed a brief online survey between May and July 2020. They were recruited via peak and professional organisations and through psychology-focused social media groups and networks. The survey examined frequency of telehealth use, reasons for non-use, telehealth modalities, prior use, attitudes towards use, plans for future use, and training, information or resource needs.

RESULTS: A total of 528 professionals (85.2% female) participated in the survey, of whom 98.9% reported using telehealth and 32.2% reported using telehealth exclusively. Respondents were less likely to use telehealth if they worked with clients experiencing complex issues (e.g. trauma), had more hours of weekly client contact, had a choice about whether to use telehealth, or felt less positive about using technology. Respondents were more likely to hold positive views towards telehealth if they were female, had used online programmes with clients previously, were frequent telehealth users, and were comfortable using technology. Participants expressed mixed views on client safety and on the impact of telehealth on therapeutic process and effectiveness.

CONCLUSION: Telehealth has a clear and ongoing role within mental healthcare, and there is a need for strong guidance for professionals on how to manage client risk, privacy and security, and how to adapt therapy for delivery via telehealth. In particular, individual-, organisational-, professional- and policy-level responses are needed to ensure that telehealth remains a viable and effective healthcare medium into the future.
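
The abstract reports which respondent characteristics were associated with higher or lower telehealth use but does not state the statistical model behind those associations. As a purely illustrative sketch, the Python code below simulates survey-style data and fits a logistic regression to produce that kind of "more/less likely" result; the variable names, simulated coefficients, and the choice of model are all assumptions for illustration, not details from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, illustrative survey data; the real study's variables and coding
# are not described in the abstract. n is set near the reported sample of 528.
rng = np.random.default_rng(0)
n = 528
df = pd.DataFrame({
    "complex_caseload":  rng.integers(0, 2, n),   # works with complex issues (e.g. trauma)
    "weekly_client_hrs": rng.integers(5, 35, n),  # hours of weekly client contact
    "has_choice":        rng.integers(0, 2, n),   # free to choose whether to use telehealth
    "tech_comfort":      rng.integers(1, 6, n),   # self-rated comfort with technology, 1-5
})

# Simulate the outcome so the illustrative directions match the abstract:
# complex caseloads, more client hours and having a choice lower the odds of
# telehealth use, while comfort with technology raises them.
linear = 1.5 - 0.8 * df.complex_caseload - 0.05 * df.weekly_client_hrs \
         - 0.6 * df.has_choice + 0.5 * df.tech_comfort
df["uses_telehealth"] = rng.binomial(1, 1 / (1 + np.exp(-linear)))

# Fit a logistic regression of telehealth use on the candidate predictors.
model = smf.logit(
    "uses_telehealth ~ complex_caseload + weekly_client_hrs + has_choice + tech_comfort",
    data=df,
).fit(disp=False)

# Odds ratios below 1 indicate factors associated with lower telehealth use.
print(np.exp(model.params).round(2))

In a fit like this, an odds ratio below 1 for, say, complex_caseload would mirror the abstract's finding that respondents working with complex presentations were less likely to use telehealth.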

2.
JMIR Ment Health ; 8(6): e24668, 2021 Jun 10.
Article in English | MEDLINE | ID: covidwho-1290553

ABSTRACT

BACKGROUND: Uncertainty surrounds the ethical and legal implications of algorithmic and data-driven technologies in the mental health context, including technologies characterized as artificial intelligence, machine learning, deep learning, and other forms of automation.

OBJECTIVE: This study aims to survey the empirical scholarly literature on the application of algorithmic and data-driven technologies in mental health initiatives to identify the legal and ethical issues that have been raised.

METHODS: We searched the Scopus, Embase, and Association for Computing Machinery databases for peer-reviewed empirical studies on the application of algorithmic technologies in mental health care. A total of 1078 relevant peer-reviewed applied studies were identified and narrowed to 132 empirical research papers for review based on selection criteria. Conventional content analysis was undertaken to address our aims and was supplemented by a keyword-in-context analysis.

RESULTS: We grouped the findings into the following five categories of technology: social media (53/132, 40.1%), smartphones (37/132, 28%), sensing technology (20/132, 15.1%), chatbots (5/132, 3.8%), and miscellaneous (17/132, 12.9%). Most initiatives were directed toward detection and diagnosis. Privacy was discussed in most papers, mainly in terms of respecting the privacy of research participants; beyond this, it received relatively little attention. A small number of studies discussed ethics directly (10/132, 7.6%) and indirectly (10/132, 7.6%). Legal issues were not substantively discussed in any study, although some were raised in passing (7/132, 5.3%), such as the rights of user subjects and compliance with privacy law.

CONCLUSIONS: Ethical and legal issues tend not to be explicitly addressed in empirical studies of algorithmic and data-driven technologies in mental health initiatives. Scholars may have considered ethical or legal matters at the ethics committee or institutional review board stage, but if so, this consideration seldom appears in any detail in the published reports of applied research. The format of peer-reviewed papers reporting applied research in this field may itself preclude a substantial focus on ethics and law. Regardless, we identified several concerns, including the near-complete lack of involvement of mental health service users, the scant consideration of algorithmic accountability, and the potential for overmedicalization and techno-solutionism. Most papers were published in the computer science field at the pilot or exploratory stage; these technologies could therefore be appropriated into practice in rarely acknowledged ways, with serious legal and ethical implications.
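
The methods state that the conventional content analysis was supplemented by a keyword-in-context (KWIC) analysis, but the authors' coding protocol is not reproduced in the abstract. The Python sketch below shows one minimal way a KWIC pass over paper texts might look; the keyword list, context window, and mini-corpus are invented for illustration and are not the authors' materials.

import re
from typing import Iterable

def kwic(text: str, keyword: str, window: int = 5) -> list[str]:
    """Return each occurrence of `keyword` with `window` words of context on each side."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"... {left} [{keyword.upper()}] {right} ...")
    return hits

def document_tally(corpus: Iterable[str], keywords: Iterable[str]) -> dict[str, int]:
    """Count how many documents mention each keyword at least once."""
    return {kw: sum(1 for doc in corpus if kwic(doc, kw)) for kw in keywords}

# Invented three-document mini-corpus standing in for the 132 reviewed papers.
papers = [
    "We obtained ethics approval and protected the privacy of research participants.",
    "The chatbot collects mood ratings from users through a smartphone app.",
    "Informed consent was obtained and data were de-identified before analysis.",
]

for kw in ("privacy", "consent", "accountability"):
    for hit in (h for doc in papers for h in kwic(doc, kw)):
        print(hit)
print(document_tally(papers, ["privacy", "consent", "accountability"]))

A document-level tally like this could in principle approximate counts of the kind reported above (e.g. 10/132 papers discussing ethics directly), although the figures in the abstract come from the authors' content analysis rather than from automated keyword counting.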
